Preliminary stuff
Libraries
library(plyr)
library(ggplot2)
library(lmerTest)
library(gridExtra)
library(grid)
library(knitr)
Data
# Loading data
load("data/mergedData/exp1Data.RData")
# Adding the normative object/location rating for each target and foil 1 & 2
combData$objLocTargetNorm <- NA_real_
combData$objLocFoil1Norm <- NA_real_
combData$objLocFoil2Norm <- NA_real_
for(i in 1:nrow(combData)){
  combData$objLocTargetNorm[i] <- combData[i, paste("loc", combData$targetLocation[i], sep = "")]
  combData$objLocFoil1Norm[i] <- combData[i, paste("loc", combData$foil1Location[i], sep = "")]
  combData$objLocFoil2Norm[i] <- combData[i, paste("loc", combData$foil2Location[i], sep = "")]
}
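The loop above can also be written without an explicit loop, using matrix indexing to pick one column per row. A minimal self-contained sketch on a toy data frame with the same structure (the toy columns and values are made up for illustration):

```r
# Toy data frame mimicking combData's structure: location rating columns
# loc1..loc3 plus a per-trial target location index
toy <- data.frame(loc1 = c(10, 20), loc2 = c(30, 40), loc3 = c(50, 60),
                  targetLocation = c(2, 3))

# Build a (row, column) index matrix and extract one rating per row
colIdx <- match(paste0("loc", toy$targetLocation), names(toy))
toy$objLocTargetNorm <- as.numeric(toy[cbind(seq_len(nrow(toy)), colIdx)])
toy$objLocTargetNorm # -> 30 60
```

This avoids growing the result element by element and scales better with the number of trials.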
Functions
pValue <- function(x, sign = '='){
  # To report a p-value in text. Accepts either a numeric p-value or an lm
  # object, in which case the overall F-test p-value is used.
  if (inherits(x, "lm")){
    s <- summary.lm(x)
    x <- pf(s$fstatistic[1L], s$fstatistic[2L], s$fstatistic[3L], lower.tail = FALSE)
  }
  if (x > 1){
    stop("There is no p-value greater than 1")
  } else if (x < 0.001){
    x.converted <- '< .001'
  } else {
    x.converted <- paste(sign, substr(as.character(round(x, 3)), 2, 5))
  }
  return(x.converted)
}
rValue <- function(x){
  # To report a correlation coefficient in text
  if (inherits(x, "lm")){
    r.squared <- summary(x)$r.squared
    x.converted <- paste('=', substr(as.character(round(r.squared, 3)), 2, 5))
  } else if (x < 0){
    x.converted <- paste('= -', substr(as.character(abs(round(x, 3))), 2, 5), sep = '')
  } else {
    x.converted <- paste('=', substr(as.character(round(x, 3)), 2, 5))
  }
  return(x.converted)
}
sigStars <- function(x){
  # Adding stars to indicate significance. Cascading assignments so that
  # smaller p-values overwrite larger thresholds; this also covers p-values
  # below 0.0001.
  stars <- rep("", length(x))
  stars[x < 0.1] <- '.' # trend
  stars[x < 0.05] <- '*'
  stars[x < 0.01] <- '**'
  stars[x < 0.001] <- '***'
  return(stars)
}
createResultTable <- function(x){
  # Creating a nice-looking coefficient table for kable()
  if(inherits(x, "glmerMod")){
    # glmer table: Estimate, SE, Z, P
    xTable <- data.frame(summary(x)$coefficients)
    xTable[, 1:3] <- round(xTable[, 1:3], 2)
    xTable[, 4] <- round(xTable[, 4], 4)
    xTable <- cbind(xTable, sigStars(xTable[, 4]))
    names(xTable) <- c('Estimate', 'SE', 'Z', 'P', 'Sig')
  } else if(inherits(x, 'merModLmerTest')){
    # lmerTest table: Estimate, SE, DF, T, P
    xTable <- data.frame(summary(x)$coefficients)
    xTable[, 1:4] <- round(xTable[, 1:4], 2)
    xTable[, 5] <- round(xTable[, 5], 4)
    xTable <- cbind(xTable, sigStars(xTable[, 5]))
    names(xTable) <- c('Estimate', 'SE', 'DF', 'T', 'P', 'Sig')
  } else if(inherits(x, 'anova') &&
            attributes(x)$heading == "Analysis of Variance Table of type III with Satterthwaite \napproximation for degrees of freedom"){
    # Only ANOVA tables on lmerTest models with Satterthwaite approximation
    xTable <- data.frame(x)
    xTable[, 1:5] <- round(xTable[, 1:5], 2)
    xTable[, 6] <- round(xTable[, 6], 4)
    xTable <- cbind(xTable, sigStars(xTable[, 6]))
    names(xTable) <- c('SS', 'MSS', 'nDF', 'dDF', 'F', 'P', 'Sig')
  } else {
    xTable <- data.frame(Note = 'No known model class')
  }
  return(xTable)
}
Analysing effect of expectancy on memory
Note that the subsequent plots are only illustrations of the models, because they average data across objects. All the statistical models are trial-based, and only trials on which the participant did not indicate that they had not seen the object (in other words, trials with item memory) enter the analyses.
As for my specific predictions, this is a quote from a draft of my first-year report, where I specify my hypotheses:
My main hypothesis is that there is a U-shaped relationship between schema expectancy and memory. However, when recall is used as an assay of memory, there is always the confound that participants might be biased towards more expected locations. In that respect, I predict that memory precision is enhanced for highly expected object/location pairings. However, if participants do not remember the location, they are more likely to place the object close to an expected location (i.e. a spawn point) than to an unexpected location. In contrast, I predict a U-shaped relationship for performance in the 3AFC task (unexpected > neutral < expected), because the retrieval effects are reduced as all three options are chosen to have similar expectancy values.
Concerning the models used in the experiments: my original plan was to include random intercepts and slopes for the objects and for the participants, but in simulations I had problems actually recovering the true parameters (see here). Furthermore, my supervisor pointed out that adding random effects for the objects might take away the effect of their expectancy. Therefore, for now, I decided to include only a random intercept for each participant. The quadratic term is included to look for the predicted U-shaped relationship, although of course a significant quadratic term does not by itself guarantee a U-shaped relationship.
As you will see, sometimes I get the exact opposite of the result I expected (see e.g. model 10). One confound in these models is that all the objects that are expected in a kitchen lie at the ends of the object/location expectancy scale, yet all of them are, by definition, highly expected in a kitchen in general, which seems to be associated with lower memory performance. Therefore, I decided post hoc to add the corresponding other form of expectancy to the models as a covariate. There will also be a second experiment, in which objects that are highly expected in a kitchen span the whole range of object/location expectancy. All the other models except the ones predicting the time to place the objects were pre-planned. I looked at that variable because a recent study using immersive VR by Draschkow & Võ (2017) found an interesting relationship between ‘scene grammar’ and object handling time.
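To make the model structure concrete, here is a minimal, self-contained sketch of the kind of model used throughout: a logistic mixed model with linear and quadratic expectancy terms and a by-participant random intercept, fitted to simulated (not real) data. All names and values here are made up for illustration.

```r
library(lme4)

set.seed(123)
nSub <- 20; nObj <- 20
sim <- data.frame(subNum = factor(rep(1:nSub, each = nObj)),
                  expect = rnorm(nSub * nObj)) # standardised expectancy
# Simulate a true U-shape plus participant-level random intercepts
eta <- -0.5 + 0.4 * sim$expect^2 + rnorm(nSub, sd = 0.5)[sim$subNum]
sim$acc <- rbinom(nrow(sim), 1, plogis(eta))

fit <- glmer(acc ~ expect + I(expect^2) + (1 | subNum),
             data = sim, family = binomial,
             control = glmerControl(optimizer = "bobyqa"))
round(summary(fit)$coefficients, 2)
```

With only 20 simulated participants and 400 trials the quadratic estimate is noisy; the sketch only illustrates the formula interface, not the power of the design.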
# Preparation
# scaling for analysis
combDataScaled <- combData
combDataScaled$objLocTargetRating <- scale(combData$objLocTargetRating)
combDataScaled$targetRankPre <- scale(combData$targetRankPre)
combDataScaled$generalRatingNorm <- scale(combData$generalRatingNorm)
combDataScaled$generalRatingPost <- scale(combData$generalRatingPost)
combDataScaled$objLocTargetNorm <- scale(combDataScaled$objLocTargetNorm)
# Adding kitchen relevance. Note that 12 objects were chosen because they are expected in a kitchen, while the other 8 objects were not.
objectAgg$expectedInKitchen <- 'low'
objectAgg[which(objectAgg$objNum < 13), 'expectedInKitchen'] <- 'highly'
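One technical note on the quadratic terms used below: entering `I(x*x)` alongside `x` is fine for scaled predictors, but the two terms can still be correlated when the predictor is skewed; orthogonal polynomials via `poly()` avoid that. A small self-contained illustration (the data here are arbitrary):

```r
set.seed(42)
x <- rexp(100)              # a skewed predictor: linear and quadratic correlate
cor(x, x^2)                 # substantial correlation

p <- poly(x, 2)             # orthogonal linear and quadratic terms
round(cor(p[, 1], p[, 2]), 10) # -> 0 by construction
```

The raw `I(x*x)` parameterisation is kept below because its coefficients are easier to interpret on the original (scaled) rating scale.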
3AFC models
# Plots
# Participant object/location expectancy versus 3AFC
plot6 <- ggplot(objectAgg, aes(x = objLocTargetRating, y = afc)) +
geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) +
geom_smooth() +
labs(y = 'Mean accuracy (3AFC)',
x = "Post-ratings expectancy",
title = 'Model 1: Object/location expectancy') +
coord_cartesian(ylim = c(0, 1.25), xlim = c(-100, 100), expand = TRUE) +
theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = c(1.25, 0.8))
plot7 <- ggplot(objectAgg, aes(x = targetRankPre, y = afc)) +
geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) +
geom_smooth() +
labs(y = 'Mean accuracy (3AFC)',
x = "Normative expectancy (ranked)",
title = 'Model 2: Object/location expectancy') +
coord_cartesian(ylim = c(0, 1.25), xlim = c(0, 400), expand = TRUE) +
theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")
plot8 <- ggplot(objectAgg, aes(x = objLocTargetNorm, y = afc)) +
geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) +
geom_smooth() +
labs(y = 'Mean accuracy (3AFC)',
x = "Normative expectancy ",
title = 'Model 3: Object/location expectancy') +
coord_cartesian(ylim = c(0, 1.25), xlim = c(-100, 100), expand = TRUE) +
theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")
plot9 <- ggplot(objectAgg, aes(x = generalRatingNorm, y = afc)) +
geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) +
geom_smooth() +
labs(y = 'Mean accuracy (3AFC)',
x = "Normative expectancy",
title = 'Model 4: General expectancy') +
coord_cartesian(ylim = c(0, 1.25), xlim = c(-100, 100), expand = TRUE) +
theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")
grid.arrange(plot6, plot7, plot8, plot9, ncol = 2, nrow = 2)

# Legend
paste(as.character(objectAgg$objNum), "=", objectAgg$objName)
## [1] "1 = microwave" "2 = kitchen roll" "3 = saucepan"
## [4] "4 = toaster" "5 = bowl of fruits" "6 = tea pot"
## [7] "7 = knife" "8 = mixer" "9 = bread"
## [10] "10 = glass jug" "11 = mug" "12 = dishes"
## [13] "13 = towels" "14 = toy" "15 = pile of books"
## [16] "16 = umbrella" "17 = hat" "18 = helmet"
## [19] "19 = calendar" "20 = fan"
# Data
subAFC <- subset(combDataScaled, combDataScaled$resCon != 1)
# Models
# Participant object/location expectancy versus 3AFC
model1 <- glmer(accAFC ~ objLocTargetRating + I(objLocTargetRating*objLocTargetRating) + (1 | subNum),
data = subAFC,
family = binomial,
control = glmerControl(optimizer = "bobyqa"),
nAGQ = 1)
kable(createResultTable(model1))
|  | Estimate | SE | Z | P | Sig |
|---|---|---|---|---|---|
| (Intercept) | 0.79 | 0.28 | 2.83 | 0.0046 | ** |
| objLocTargetRating | -0.07 | 0.18 | -0.39 | 0.6932 |  |
| I(objLocTargetRating * objLocTargetRating) | 0.01 | 0.21 | 0.05 | 0.9613 |  |
# Normative (ranked) object/location expectancy versus 3AFC
model2 <- glmer(accAFC ~ targetRankPre + I(targetRankPre*targetRankPre) + (1 | subNum),
data = subAFC,
family = binomial,
control = glmerControl(optimizer = "bobyqa"),
nAGQ = 1)
kable(createResultTable(model2))
|  | Estimate | SE | Z | P | Sig |
|---|---|---|---|---|---|
| (Intercept) | 0.99 | 0.30 | 3.32 | 0.0009 | *** |
| targetRankPre | -0.30 | 0.18 | -1.65 | 0.0981 | . |
| I(targetRankPre * targetRankPre) | -0.17 | 0.24 | -0.73 | 0.4640 |  |
# Normative object/location expectancy versus 3AFC
model3 <- glmer(accAFC ~ objLocTargetNorm + I(objLocTargetNorm*objLocTargetNorm) + (1 | subNum),
data = subAFC,
family = binomial,
control = glmerControl(optimizer = "bobyqa"),
nAGQ = 1)
kable(createResultTable(model3))
|  | Estimate | SE | Z | P | Sig |
|---|---|---|---|---|---|
| (Intercept) | 0.47 | 0.25 | 1.90 | 0.0579 | . |
| objLocTargetNorm | 0.08 | 0.20 | 0.42 | 0.6714 |  |
| I(objLocTargetNorm * objLocTargetNorm) | 0.37 | 0.20 | 1.82 | 0.0681 | . |
# Normative general expectancy versus 3AFC
model4 <- glmer(accAFC ~ generalRatingNorm + I(generalRatingNorm*generalRatingNorm) + (1 | subNum),
data = subAFC,
family = binomial,
control = glmerControl(optimizer = "bobyqa"),
nAGQ = 1)
kable(createResultTable(model4))
|  | Estimate | SE | Z | P | Sig |
|---|---|---|---|---|---|
| (Intercept) | 1.10 | 0.34 | 3.23 | 0.0012 | ** |
| generalRatingNorm | -0.64 | 0.29 | -2.22 | 0.0267 | * |
| I(generalRatingNorm * generalRatingNorm) | -0.27 | 0.27 | -1.01 | 0.3142 |  |
3AFC models with covariates
This was decided post hoc. The idea here was to adjust for the respective general or object/location expectancy. Another idea would be to control for answer time.
model5 <- glmer(accAFC ~ objLocTargetRating +
I(objLocTargetRating*objLocTargetRating) +
generalRatingPost +
(1 | subNum), data = combDataScaled,
family = binomial,
control = glmerControl(optimizer = "bobyqa"),
nAGQ = 1)
kable(createResultTable(model5))
|  | Estimate | SE | Z | P | Sig |
|---|---|---|---|---|---|
| (Intercept) | 0.45 | 0.26 | 1.70 | 0.0883 | . |
| objLocTargetRating | 0.06 | 0.17 | 0.34 | 0.7326 |  |
| I(objLocTargetRating * objLocTargetRating) | 0.22 | 0.20 | 1.08 | 0.2795 |  |
| generalRatingPost | -0.49 | 0.19 | -2.62 | 0.0087 | ** |
model6 <- glmer(accAFC ~ targetRankPre +
I(targetRankPre*targetRankPre) +
generalRatingNorm +
(1 | subNum), data = combDataScaled,
family = binomial,
control = glmerControl(optimizer = "bobyqa"),
nAGQ = 1)
kable(createResultTable(model6))
|  | Estimate | SE | Z | P | Sig |
|---|---|---|---|---|---|
| (Intercept) | 0.35 | 0.33 | 1.05 | 0.2932 |  |
| targetRankPre | -0.04 | 0.16 | -0.26 | 0.7963 |  |
| I(targetRankPre * targetRankPre) | 0.32 | 0.29 | 1.12 | 0.2631 |  |
| generalRatingNorm | -0.58 | 0.23 | -2.51 | 0.0119 | * |
model7 <- glmer(accAFC ~ objLocTargetNorm +
I(objLocTargetNorm*objLocTargetNorm) +
generalRatingNorm +
(1 | subNum), data = combDataScaled,
family = binomial,
control = glmerControl(optimizer = "bobyqa"),
nAGQ = 1)
kable(createResultTable(model7))
|  | Estimate | SE | Z | P | Sig |
|---|---|---|---|---|---|
| (Intercept) | 0.48 | 0.25 | 1.95 | 0.0512 | . |
| objLocTargetNorm | -0.01 | 0.18 | -0.05 | 0.9631 |  |
| I(objLocTargetNorm * objLocTargetNorm) | 0.18 | 0.18 | 0.98 | 0.3293 |  |
| generalRatingNorm | -0.39 | 0.19 | -2.06 | 0.0397 | * |
model8 <- glmer(accAFC ~ generalRatingNorm +
I(generalRatingNorm*generalRatingNorm) +
objLocTargetNorm +
(1 | subNum), data = combDataScaled,
family = binomial,
control = glmerControl(optimizer = "bobyqa"),
nAGQ = 1)
kable(createResultTable(model8))
|  | Estimate | SE | Z | P | Sig |
|---|---|---|---|---|---|
| (Intercept) | 0.98 | 0.31 | 3.13 | 0.0017 | ** |
| generalRatingNorm | -0.67 | 0.26 | -2.55 | 0.0108 | * |
| I(generalRatingNorm * generalRatingNorm) | -0.32 | 0.24 | -1.33 | 0.1840 |  |
| objLocTargetNorm | 0.02 | 0.18 | 0.11 | 0.9099 |  |
3AFC interpretation
The most robust result is that 3AFC performance is much higher for objects that are generally unexpected in a kitchen: all forms of general expectancy are negatively associated with 3AFC performance. Notably, though, model 3 shows a trend towards a U-shaped relationship.
Recall models
The actual statistics are calculated only on trials on which the participants did not report having no memory of the object.
plot10 <- ggplot(objectAgg, aes(x = objLocTargetRating, y = recall)) +
geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) +
geom_smooth() +
labs(y = 'Mean accuracy (Recall)',
x = "Post-ratings expectancy",
title = 'Model 9: Object/location expectancy') +
coord_cartesian(ylim = c(0, 1), xlim = c(-100, 100), expand = TRUE) +
theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = c(1.25, 0.8))
plot11 <- ggplot(objectAgg, aes(x = targetRankPre, y = recall)) +
geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) +
geom_smooth() +
labs(y = 'Mean accuracy (Recall)',
x = "Normative expectancy (ranked)",
title = 'Model 10: Object/location expectancy') +
coord_cartesian(ylim = c(0, 1), xlim = c(0, 400), expand = TRUE) +
theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")
plot12 <- ggplot(objectAgg, aes(x = objLocTargetNorm, y = recall)) +
geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) +
geom_smooth() +
labs(y = 'Mean accuracy (Recall)',
x = "Normative expectancy",
title = 'Model 11: Object/location expectancy') +
coord_cartesian(ylim = c(0, 1), xlim = c(-100, 100), expand = TRUE) +
theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")
plot13 <- ggplot(objectAgg, aes(x = generalRatingNorm, y = recall)) +
geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) +
geom_smooth() +
labs(y = 'Mean accuracy (Recall)',
x = "Normative expectancy",
title = 'Model 12: General expectancy') +
coord_cartesian(ylim = c(0, 1), xlim = c(-100, 100), expand = TRUE) +
theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")
plot14 <- ggplot(objectAgg, aes(x = generalRatingPost, y = recall)) +
geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) +
geom_smooth() +
labs(y = 'Mean accuracy (Recall)',
x = "Post-ratings expectancy",
title = 'Model 13: General expectancy') +
coord_cartesian(ylim = c(0, 1), xlim = c(-100, 100), expand = TRUE) +
theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")
grid.arrange(plot10, plot11, plot12, plot13, plot14, ncol = 2, nrow = 3)

# Legend
paste(as.character(objectAgg$objNum), "=", objectAgg$objName)
## [1] "1 = microwave" "2 = kitchen roll" "3 = saucepan"
## [4] "4 = toaster" "5 = bowl of fruits" "6 = tea pot"
## [7] "7 = knife" "8 = mixer" "9 = bread"
## [10] "10 = glass jug" "11 = mug" "12 = dishes"
## [13] "13 = towels" "14 = toy" "15 = pile of books"
## [16] "16 = umbrella" "17 = hat" "18 = helmet"
## [19] "19 = calendar" "20 = fan"
# Data
subRecall <- subset(combDataScaled, combDataScaled$recallNoMemory == 0)
# Participant object/location expectancy versus recall
model9 <- glmer(accRecall ~ objLocTargetRating +
I(objLocTargetRating*objLocTargetRating) +
(1 | subNum),
data = subRecall,
family = binomial,
control = glmerControl(optimizer = "bobyqa"),
nAGQ = 1)
kable(createResultTable(model9))
|  | Estimate | SE | Z | P | Sig |
|---|---|---|---|---|---|
| (Intercept) | -0.81 | 0.39 | -2.05 | 0.0404 | * |
| objLocTargetRating | 0.20 | 0.20 | 0.99 | 0.3204 |  |
| I(objLocTargetRating * objLocTargetRating) | 0.15 | 0.25 | 0.63 | 0.5297 |  |
# Normative (ranked) object/location expectancy versus recall
model10 <- glmer(accRecall ~ targetRankPre +
I(targetRankPre*targetRankPre) +
(1 | subNum),
data = subRecall,
family = binomial,
control = glmerControl(optimizer = "bobyqa"),
nAGQ = 1)
kable(createResultTable(model10))
|  | Estimate | SE | Z | P | Sig |
|---|---|---|---|---|---|
| (Intercept) | -0.17 | 0.40 | -0.42 | 0.6741 |  |
| targetRankPre | -0.14 | 0.20 | -0.70 | 0.4846 |  |
| I(targetRankPre * targetRankPre) | -0.51 | 0.28 | -1.82 | 0.0693 | . |
# Normative object/location expectancy versus recall
model11 <- glmer(accRecall ~ objLocTargetNorm +
I(objLocTargetNorm*objLocTargetNorm) +
(1 | subNum),
data = subRecall,
family = binomial,
control = glmerControl(optimizer = "bobyqa"),
nAGQ = 1)
kable(createResultTable(model11))
|  | Estimate | SE | Z | P | Sig |
|---|---|---|---|---|---|
| (Intercept) | -0.87 | 0.40 | -2.19 | 0.0282 | * |
| objLocTargetNorm | 0.44 | 0.20 | 2.21 | 0.0272 | * |
| I(objLocTargetNorm * objLocTargetNorm) | 0.18 | 0.21 | 0.87 | 0.3847 |  |
# Normative general expectancy versus recall
model12 <- glmer(accRecall ~ generalRatingNorm +
I(generalRatingNorm*generalRatingNorm) +
(1 | subNum), data = subRecall,
family = binomial,
control = glmerControl(optimizer = "bobyqa"),
nAGQ = 1)
kable(createResultTable(model12))
|  | Estimate | SE | Z | P | Sig |
|---|---|---|---|---|---|
| (Intercept) | -0.76 | 0.43 | -1.77 | 0.0768 | . |
| generalRatingNorm | -0.77 | 0.30 | -2.59 | 0.0095 | ** |
| I(generalRatingNorm * generalRatingNorm) | 0.11 | 0.29 | 0.38 | 0.7074 |  |
# Participant general expectancy versus recall
model13 <- glmer(accRecall ~ generalRatingPost +
I(generalRatingPost*generalRatingPost) +
(1 | subNum),
data = subRecall,
family = binomial,
control = glmerControl(optimizer = "bobyqa"),
nAGQ = 1)
kable(createResultTable(model13))
|  | Estimate | SE | Z | P | Sig |
|---|---|---|---|---|---|
| (Intercept) | -0.58 | 0.41 | -1.42 | 0.1559 |  |
| generalRatingPost | -0.76 | 0.32 | -2.37 | 0.0179 | * |
| I(generalRatingPost * generalRatingPost) | -0.08 | 0.29 | -0.29 | 0.7693 |  |
Recall models with covariates
# Participant object/location expectancy versus recall
model14 <- glmer(accRecall ~
objLocTargetRating +
I(objLocTargetRating*objLocTargetRating) +
generalRatingPost +
(1 | subNum),
data = subRecall,
family = binomial,
control = glmerControl(optimizer = "bobyqa"),
nAGQ = 1)
kable(createResultTable(model14))
|  | Estimate | SE | Z | P | Sig |
|---|---|---|---|---|---|
| (Intercept) | -1.09 | 0.43 | -2.55 | 0.0107 | * |
| objLocTargetRating | 0.25 | 0.21 | 1.16 | 0.2469 |  |
| I(objLocTargetRating * objLocTargetRating) | 0.44 | 0.27 | 1.62 | 0.1042 |  |
| generalRatingPost | -0.80 | 0.23 | -3.45 | 0.0006 | *** |
# Normative (ranked) object/location expectancy versus recall
model15 <- glmer(accRecall ~ targetRankPre +
I(targetRankPre*targetRankPre) +
generalRatingNorm +
(1 | subNum),
data = subRecall,
family = binomial,
control = glmerControl(optimizer = "bobyqa"),
nAGQ = 1)
kable(createResultTable(model15))
|  | Estimate | SE | Z | P | Sig |
|---|---|---|---|---|---|
| (Intercept) | -1.28 | 0.55 | -2.31 | 0.0210 | * |
| targetRankPre | 0.08 | 0.22 | 0.38 | 0.7041 |  |
| I(targetRankPre * targetRankPre) | 0.62 | 0.42 | 1.48 | 0.1393 |  |
| generalRatingNorm | -1.20 | 0.33 | -3.58 | 0.0003 | *** |
# Normative object/location expectancy versus recall
model16 <- glmer(accRecall ~ objLocTargetNorm +
I(objLocTargetNorm*objLocTargetNorm) +
generalRatingNorm +
(1 | subNum),
data = subRecall,
family = binomial,
control = glmerControl(optimizer = "bobyqa"),
nAGQ = 1)
kable(createResultTable(model16))
|  | Estimate | SE | Z | P | Sig |
|---|---|---|---|---|---|
| (Intercept) | -0.65 | 0.42 | -1.55 | 0.1205 |  |
| objLocTargetNorm | 0.24 | 0.22 | 1.13 | 0.2588 |  |
| I(objLocTargetNorm * objLocTargetNorm) | -0.03 | 0.23 | -0.14 | 0.8899 |  |
| generalRatingNorm | -0.78 | 0.24 | -3.32 | 0.0009 | *** |
# Normative general expectancy versus recall
model17 <- glmer(accRecall ~ generalRatingNorm +
I(generalRatingNorm*generalRatingNorm) +
objLocTargetNorm +
(1 | subNum),
data = subRecall,
family = binomial,
control = glmerControl(optimizer = "bobyqa"),
nAGQ = 1)
kable(createResultTable(model17))
|  | Estimate | SE | Z | P | Sig |
|---|---|---|---|---|---|
| (Intercept) | -0.75 | 0.44 | -1.72 | 0.0850 | . |
| generalRatingNorm | -0.72 | 0.30 | -2.37 | 0.0178 | * |
| I(generalRatingNorm * generalRatingNorm) | 0.08 | 0.30 | 0.28 | 0.7764 |  |
| objLocTargetNorm | 0.24 | 0.22 | 1.10 | 0.2727 |  |
# Participant general expectancy versus recall
model18 <- glmer(accRecall ~ generalRatingPost +
I(generalRatingPost*generalRatingPost) +
objLocTargetRating +
(1 | subNum),
data = subRecall,
family = binomial,
control = glmerControl(optimizer = "bobyqa"),
nAGQ = 1)
kable(createResultTable(model18))
|  | Estimate | SE | Z | P | Sig |
|---|---|---|---|---|---|
| (Intercept) | -0.57 | 0.42 | -1.36 | 0.1723 |  |
| generalRatingPost | -0.77 | 0.32 | -2.37 | 0.0177 | * |
| I(generalRatingPost * generalRatingPost) | -0.10 | 0.29 | -0.34 | 0.7342 |  |
| objLocTargetRating | 0.16 | 0.21 | 0.78 | 0.4339 |  |
Recall interpretation
Models 9 and 10 show a very unexpected inverted U-shaped relationship. However, this might again be because objects that are generally unexpected in a kitchen are associated with better memory, and these are the objects in the middle of the object/location expectancy scale. Model 11 looks very similar to model 3. When general expectancy is controlled for, several models show a trend towards a significant quadratic term (models 14 and 15).
Euclidean distance models
plot15 <- ggplot(objectAgg, aes(x = objLocTargetRating, y = euclideanDist)) +
geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) +
geom_smooth() +
labs(y = 'Euclidean distance in vm',
x = "Post-ratings expectancy",
title = 'Model 19: Object/location expectancy') +
coord_cartesian(ylim = c(0, 1), xlim = c(-100, 100), expand = TRUE) +
theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = c(1.25, 0.8))
plot16 <- ggplot(objectAgg, aes(x = targetRankPre, y = euclideanDist)) +
geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) +
geom_smooth() +
labs(y = 'Euclidean distance in vm',
x = "Normative expectancy (ranked)",
title = 'Model 20: Object/location expectancy') +
coord_cartesian(ylim = c(0, 1), xlim = c(0, 400), expand = TRUE) +
theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")
plot17 <- ggplot(objectAgg, aes(x = objLocTargetNorm, y = euclideanDist)) +
geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) +
geom_smooth() +
labs(y = 'Euclidean distance in vm',
x = "Normative expectancy",
title = 'Model 21: Object/location expectancy') +
coord_cartesian(ylim = c(0, 1), xlim = c(-100, 100), expand = TRUE) +
theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")
plot18 <- ggplot(objectAgg, aes(x = generalRatingNorm, y = euclideanDist)) +
geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) +
geom_smooth() +
labs(y = 'Euclidean distance in vm',
x = "Normative expectancy",
title = 'Model 22: General expectancy') +
coord_cartesian(ylim = c(0, 1), xlim = c(-100, 100), expand = TRUE) +
theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")
plot19 <- ggplot(objectAgg, aes(x = generalRatingPost, y = euclideanDist)) +
geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) +
geom_smooth() +
labs(y = 'Euclidean distance in vm',
x = "Post-ratings expectancy",
title = 'Model 23: General expectancy') +
coord_cartesian(ylim = c(0, 1), xlim = c(-100, 100), expand = TRUE) +
theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")
grid.arrange(plot15, plot16, plot17, plot18, plot19, ncol = 2, nrow = 3)

# Legend
paste(as.character(objectAgg$objNum), "=", objectAgg$objName)
## [1] "1 = microwave" "2 = kitchen roll" "3 = saucepan"
## [4] "4 = toaster" "5 = bowl of fruits" "6 = tea pot"
## [7] "7 = knife" "8 = mixer" "9 = bread"
## [10] "10 = glass jug" "11 = mug" "12 = dishes"
## [13] "13 = towels" "14 = toy" "15 = pile of books"
## [16] "16 = umbrella" "17 = hat" "18 = helmet"
## [19] "19 = calendar" "20 = fan"
# Model 19:
model19 <- lmer(euclideanDist ~ objLocTargetRating +
I(objLocTargetRating*objLocTargetRating) +
(1 | subNum),
data = subRecall)
kable(createResultTable(model19))
|  | Estimate | SE | DF | T | P | Sig |
|---|---|---|---|---|---|---|
| (Intercept) | 1.84 | 0.28 | 12.21 | 6.54 | 0.0000 |  |
| objLocTargetRating | -0.29 | 0.13 | 125.71 | -2.27 | 0.0248 | * |
| I(objLocTargetRating * objLocTargetRating) | -0.34 | 0.16 | 128.94 | -2.12 | 0.0355 | * |
# Model 20:
model20 <- lmer(euclideanDist ~ targetRankPre +
I(targetRankPre*targetRankPre) +
(1 | subNum),
data = subRecall)
kable(createResultTable(model20))
|  | Estimate | SE | DF | T | P | Sig |
|---|---|---|---|---|---|---|
| (Intercept) | 1.52 | 0.31 | 13.74 | 4.87 | 0.0003 | *** |
| targetRankPre | -0.02 | 0.13 | 123.13 | -0.17 | 0.8673 |  |
| I(targetRankPre * targetRankPre) | -0.01 | 0.17 | 123.33 | -0.04 | 0.9668 |  |
# Model 21:
model21 <- lmer(euclideanDist ~ objLocTargetNorm +
I(objLocTargetNorm*objLocTargetNorm) +
(1 | subNum),
data = subRecall)
kable(createResultTable(model21))
|  | Estimate | SE | DF | T | P | Sig |
|---|---|---|---|---|---|---|
| (Intercept) | 1.42 | 0.29 | 10.46 | 4.85 | 0.0006 | *** |
| objLocTargetNorm | -0.22 | 0.12 | 122.84 | -1.82 | 0.0714 | . |
| I(objLocTargetNorm * objLocTargetNorm) | 0.09 | 0.13 | 122.90 | 0.73 | 0.4658 |  |
# Model 22:
model22 <- lmer(euclideanDist ~ generalRatingNorm +
I(generalRatingNorm*generalRatingNorm) +
(1 | subNum),
data = subRecall)
kable(createResultTable(model22))
|  | Estimate | SE | DF | T | P | Sig |
|---|---|---|---|---|---|---|
| (Intercept) | 1.71 | 0.32 | 12.92 | 5.39 | 0.0001 |  |
| generalRatingNorm | -0.05 | 0.19 | 123.85 | -0.28 | 0.7831 |  |
| I(generalRatingNorm * generalRatingNorm) | -0.21 | 0.18 | 124.17 | -1.18 | 0.2404 |  |
# Model 23:
model23 <- lmer(euclideanDist ~ generalRatingPost +
I(generalRatingPost*generalRatingPost) +
(1 | subNum),
data = subRecall)
kable(createResultTable(model23))
|  | Estimate | SE | DF | T | P | Sig |
|---|---|---|---|---|---|---|
| (Intercept) | 1.65 | 0.31 | 13.26 | 5.36 | 0.0001 |  |
| generalRatingPost | -0.13 | 0.20 | 127.06 | -0.64 | 0.5229 |  |
| I(generalRatingPost * generalRatingPost) | -0.14 | 0.19 | 128.05 | -0.76 | 0.4503 |  |
Euclidean distance models with covariates
# Model 24:
model24 <- lmer(euclideanDist ~ objLocTargetRating +
I(objLocTargetRating*objLocTargetRating) +
generalRatingPost +
(1 | subNum),
data = subRecall)
kable(createResultTable(model24))
|  | Estimate | SE | DF | T | P | Sig |
|---|---|---|---|---|---|---|
| (Intercept) | 1.86 | 0.28 | 12.61 | 6.56 | 0.0000 |  |
| objLocTargetRating | -0.30 | 0.13 | 124.70 | -2.28 | 0.0244 | * |
| I(objLocTargetRating * objLocTargetRating) | -0.35 | 0.16 | 127.95 | -2.15 | 0.0333 | * |
| generalRatingPost | 0.05 | 0.14 | 123.51 | 0.39 | 0.6952 |  |
# Model 25:
model25 <- lmer(euclideanDist ~ targetRankPre +
I(targetRankPre*targetRankPre) +
generalRatingNorm +
(1 | subNum),
data = subRecall)
kable(createResultTable(model25))
|  | Estimate | SE | DF | T | P | Sig |
|---|---|---|---|---|---|---|
| (Intercept) | 1.69 | 0.35 | 19.66 | 4.86 | 0.0001 |  |
| targetRankPre | -0.06 | 0.13 | 122.08 | -0.44 | 0.6616 |  |
| I(targetRankPre * targetRankPre) | -0.19 | 0.24 | 122.38 | -0.81 | 0.4172 |  |
| generalRatingNorm | 0.21 | 0.18 | 122.07 | 1.15 | 0.2506 |  |
# Model 26:
model26 <- lmer(euclideanDist ~ objLocTargetNorm +
I(objLocTargetNorm*objLocTargetNorm) +
generalRatingNorm +
(1 | subNum),
data = subRecall)
kable(createResultTable(model26))
|  | Estimate | SE | DF | T | P | Sig |
|---|---|---|---|---|---|---|
| (Intercept) | 1.40 | 0.30 | 11.11 | 4.71 | 0.0006 | *** |
| objLocTargetNorm | -0.20 | 0.13 | 122.05 | -1.57 | 0.1197 |  |
| I(objLocTargetNorm * objLocTargetNorm) | 0.11 | 0.14 | 121.90 | 0.83 | 0.4085 |  |
| generalRatingNorm | 0.07 | 0.14 | 122.23 | 0.46 | 0.6489 |  |
# Model 27:
model27 <- lmer(euclideanDist ~ generalRatingNorm +
I(generalRatingNorm*generalRatingNorm) +
objLocTargetNorm +
(1 | subNum),
data = subRecall)
kable(createResultTable(model27))
|  | Estimate | SE | DF | T | P | Sig |
|---|---|---|---|---|---|---|
| (Intercept) | 1.69 | 0.32 | 12.88 | 5.31 | 0.0001 |  |
| generalRatingNorm | -0.10 | 0.19 | 122.88 | -0.54 | 0.5914 |  |
| I(generalRatingNorm * generalRatingNorm) | -0.19 | 0.18 | 123.11 | -1.04 | 0.3023 |  |
| objLocTargetNorm | -0.19 | 0.13 | 121.97 | -1.44 | 0.1528 |  |
# Model 28:
model28 <- lmer(euclideanDist ~ generalRatingPost +
I(generalRatingPost*generalRatingPost) +
objLocTargetRating +
(1 | subNum),
data = subRecall)
kable(createResultTable(model28))
|  | Estimate | SE | DF | T | P | Sig |
|---|---|---|---|---|---|---|
| (Intercept) | 1.63 | 0.31 | 13.02 | 5.29 | 0.0001 |  |
| generalRatingPost | -0.12 | 0.20 | 125.92 | -0.61 | 0.5431 |  |
| I(generalRatingPost * generalRatingPost) | -0.13 | 0.19 | 126.95 | -0.66 | 0.5080 |  |
| objLocTargetRating | -0.22 | 0.13 | 123.39 | -1.72 | 0.0878 | . |
Euclidean distance interpretation
The same picture emerges when the Euclidean distance is the dependent variable. Model 24 is interesting, as it continues to show the significant quadratic effect even when general expectancy is controlled for.
Answer time models
plot15 <- ggplot(objectAgg, aes(x = objLocTargetRating, y = answerTime)) +
geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) +
geom_smooth() +
labs(y = 'Time to place object in sec',
x = "Post-ratings expectancy",
title = 'Model 19: Object/location expectancy') +
coord_cartesian(ylim = c(0, 20), xlim = c(-100, 100), expand = TRUE) +
theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = c(1.25, 0.8))
plot16 <- ggplot(objectAgg, aes(x = targetRankPre, y = answerTime)) +
geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) +
geom_smooth() +
labs(y = 'Time to place object in sec',
x = "Normative expectancy (ranked)",
title = 'Model 20: Object/location expectancy') +
coord_cartesian(ylim = c(0, 20), xlim = c(0, 400), expand = TRUE) +
theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")
plot17 <- ggplot(objectAgg, aes(x = objLocTargetNorm, y = answerTime)) +
geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) +
geom_smooth() +
labs(y = 'Time to place object in sec',
x = "Normative expectancy",
title = 'Model 21: Object/location expectancy') +
coord_cartesian(ylim = c(0, 20), xlim = c(-100, 100), expand = TRUE) +
theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")
plot18 <- ggplot(objectAgg, aes(x = generalRatingNorm, y = answerTime)) +
geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) +
geom_smooth() +
labs(y = 'Time to place object in sec',
x = "Normative expectancy",
title = 'Model 22: General expectancy') +
coord_cartesian(ylim = c(0, 20), xlim = c(-100, 100), expand = TRUE) +
theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")
plot19 <- ggplot(objectAgg, aes(x = generalRatingPost, y = answerTime)) +
geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) +
geom_smooth() +
labs(y = 'Time to place object in sec',
x = "Post-ratings expectancy",
title = 'Model 23: General expectancy') +
coord_cartesian(ylim = c(0, 20), xlim = c(-100, 100), expand = TRUE) +
theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")
grid.arrange(plot15, plot16, plot17, plot18, plot19, ncol = 2, nrow = 3)

# Legend
paste(as.character(objectAgg$objNum), "=", objectAgg$objName)
## [1] "1 = microwave" "2 = kitchen roll" "3 = saucepan"
## [4] "4 = toaster" "5 = bowl of fruits" "6 = tea pot"
## [7] "7 = knife" "8 = mixer" "9 = bread"
## [10] "10 = glass jug" "11 = mug" "12 = dishes"
## [13] "13 = towels" "14 = toy" "15 = pile of books"
## [16] "16 = umbrella" "17 = hat" "18 = helmet"
## [19] "19 = calendar" "20 = fan"
# Model 19:
model19 <- lmer(answerTime ~ objLocTargetRating +
I(objLocTargetRating*objLocTargetRating) +
(1 | subNum),
data = subRecall)
kable(createResultTable(model19))
|                                            | Estimate | Std. Error |     df | t value | Pr(>\|t\|) |   |
|:-------------------------------------------|---------:|-----------:|-------:|--------:|-----------:|:--|
| (Intercept)                                |    11.50 |       1.54 |  15.87 |    7.45 |     0.0000 |   |
| objLocTargetRating                         |     0.08 |       0.84 | 127.43 |    0.10 |     0.9234 |   |
| I(objLocTargetRating * objLocTargetRating) |    -1.09 |       1.01 | 129.99 |   -1.07 |     0.2848 |   |
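As an aside (not part of the original analysis): the quadratic terms in these models are written as `I(x*x)`, which is equivalent to a raw polynomial term via `poly(x, 2, raw = TRUE)`. A minimal sketch with `lm()` on simulated data illustrates the equivalence; the variable names here are hypothetical.

```r
# Sketch: I(x * x) and poly(x, 2, raw = TRUE) specify the same quadratic model.
set.seed(1)
x <- rnorm(100)
y <- 2 + 0.5 * x - 1.2 * x^2 + rnorm(100)

fit1 <- lm(y ~ x + I(x * x))               # style used in the models above
fit2 <- lm(y ~ poly(x, 2, raw = TRUE))     # equivalent raw-polynomial idiom

# Coefficients agree up to naming of the terms
all.equal(unname(coef(fit1)), unname(coef(fit2)))
```

With `raw = TRUE` the two parameterisations are identical; the default `poly(x, 2)` would instead use orthogonal polynomials, which change the individual coefficients but not the fitted values.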
# Model 20:
model20 <- lmer(answerTime ~ targetRankPre +
I(targetRankPre*targetRankPre) +
(1 | subNum),
data = subRecall)
kable(createResultTable(model20))
|                                  | Estimate | Std. Error |     df | t value | Pr(>\|t\|) |   |
|:---------------------------------|---------:|-----------:|-------:|--------:|-----------:|:--|
| (Intercept)                      |    10.27 |       1.64 |  20.41 |    6.27 |     0.0000 |   |
| targetRankPre                    |     0.78 |       0.80 | 123.26 |    0.98 |     0.3306 |   |
| I(targetRankPre * targetRankPre) |     0.18 |       1.10 | 123.62 |    0.16 |     0.8697 |   |
# Model 21:
model21 <- lmer(answerTime ~ objLocTargetNorm +
I(objLocTargetNorm*objLocTargetNorm) +
(1 | subNum),
data = subRecall)
kable(createResultTable(model21))
|                                        | Estimate | Std. Error |     df | t value | Pr(>\|t\|) |   |
|:---------------------------------------|---------:|-----------:|-------:|--------:|-----------:|:--|
| (Intercept)                            |     9.77 |       1.49 |  13.55 |    6.57 |     0.0000 |   |
| objLocTargetNorm                       |    -1.20 |       0.78 | 122.71 |   -1.54 |     0.1254 |   |
| I(objLocTargetNorm * objLocTargetNorm) |     0.65 |       0.82 | 122.84 |    0.80 |     0.4277 |   |
# Model 22:
model22 <- lmer(answerTime ~ generalRatingNorm +
I(generalRatingNorm*generalRatingNorm) +
(1 | subNum),
data = subRecall)
kable(createResultTable(model22))
|                                          | Estimate | Std. Error |     df | t value | Pr(>\|t\|) |   |
|:-----------------------------------------|---------:|-----------:|-------:|--------:|-----------:|:--|
| (Intercept)                              |    10.39 |       1.60 |  19.42 |    6.48 |     0.0000 |   |
| generalRatingNorm                        |     2.12 |       1.16 | 124.57 |    1.83 |     0.0694 | . |
| I(generalRatingNorm * generalRatingNorm) |    -0.11 |       1.11 | 125.13 |   -0.10 |     0.9215 |   |
#kable(createResultTable(anova(model22)))
# Model 23:
model23 <- lmer(answerTime ~ generalRatingPost +
I(generalRatingPost*generalRatingPost) +
(1 | subNum),
data = subRecall)
kable(createResultTable(model23))
|                                          | Estimate | Std. Error |     df | t value | Pr(>\|t\|) |   |
|:-----------------------------------------|---------:|-----------:|-------:|--------:|-----------:|:--|
| (Intercept)                              |    10.56 |       1.66 |  16.77 |    6.37 |     0.0000 |   |
| generalRatingPost                        |     1.81 |       1.25 | 128.58 |    1.45 |     0.1495 |   |
| I(generalRatingPost * generalRatingPost) |    -0.29 |       1.17 | 129.54 |   -0.24 |     0.8080 |   |
Answer time models with covariates
# Model 24:
model24 <- lmer(answerTime ~ objLocTargetRating +
I(objLocTargetRating*objLocTargetRating) +
generalRatingPost +
(1 | subNum),
data = subRecall)
kable(createResultTable(model24))
|                                            | Estimate | Std. Error |     df | t value | Pr(>\|t\|) |    |
|:-------------------------------------------|---------:|-----------:|-------:|--------:|-----------:|:---|
| (Intercept)                                |    12.05 |       1.58 |  15.51 |    7.62 |     0.0000 |    |
| objLocTargetRating                         |    -0.01 |       0.81 | 125.92 |   -0.01 |     0.9898 |    |
| I(objLocTargetRating * objLocTargetRating) |    -1.83 |       1.02 | 128.90 |   -1.79 |     0.0765 | .  |
| generalRatingPost                          |     2.45 |       0.87 | 124.36 |    2.82 |     0.0055 | ** |
# Model 25:
model25 <- lmer(answerTime ~ targetRankPre +
I(targetRankPre*targetRankPre) +
generalRatingNorm +
(1 | subNum),
data = subRecall)
kable(createResultTable(model25))
|                                  | Estimate | Std. Error |     df | t value | Pr(>\|t\|) |     |
|:---------------------------------|---------:|-----------:|-------:|--------:|-----------:|:----|
| (Intercept)                      |    13.40 |       1.84 |  30.53 |    7.28 |     0.0000 |     |
| targetRankPre                    |     0.11 |       0.79 | 122.13 |    0.14 |     0.8854 |     |
| I(targetRankPre * targetRankPre) |    -3.24 |       1.43 | 122.66 |   -2.27 |     0.0251 | *   |
| generalRatingNorm                |     3.82 |       1.08 | 122.15 |    3.53 |     0.0006 | *** |
# Model 26:
model26 <- lmer(answerTime ~ objLocTargetNorm +
I(objLocTargetNorm*objLocTargetNorm) +
generalRatingNorm +
(1 | subNum),
data = subRecall)
kable(createResultTable(model26))
|                                        | Estimate | Std. Error |     df | t value | Pr(>\|t\|) |    |
|:---------------------------------------|---------:|-----------:|-------:|--------:|-----------:|:---|
| (Intercept)                            |     8.94 |       1.48 |  14.85 |    6.04 |     0.0000 |    |
| objLocTargetNorm                       |    -0.50 |       0.80 | 122.12 |   -0.62 |     0.5371 |    |
| I(objLocTargetNorm * objLocTargetNorm) |     1.30 |       0.84 | 121.86 |    1.56 |     0.1224 |    |
| generalRatingNorm                      |     2.43 |       0.90 | 122.45 |    2.71 |     0.0076 | ** |
# Model 27:
model27 <- lmer(answerTime ~ generalRatingNorm +
I(generalRatingNorm*generalRatingNorm) +
objLocTargetNorm +
(1 | subNum),
data = subRecall)
kable(createResultTable(model27))
|                                          | Estimate | Std. Error |     df | t value | Pr(>\|t\|) |   |
|:-----------------------------------------|---------:|-----------:|-------:|--------:|-----------:|:--|
| (Intercept)                              |    10.34 |       1.61 |  19.57 |    6.43 |     0.0000 |   |
| generalRatingNorm                        |     2.00 |       1.18 | 123.65 |    1.69 |     0.0930 | . |
| I(generalRatingNorm * generalRatingNorm) |    -0.05 |       1.12 | 124.08 |   -0.04 |     0.9676 |   |
| objLocTargetNorm                         |    -0.46 |       0.81 | 122.09 |   -0.56 |     0.5747 |   |
# Model 28:
model28 <- lmer(answerTime ~ generalRatingPost +
I(generalRatingPost*generalRatingPost) +
objLocTargetRating +
(1 | subNum),
data = subRecall)
kable(createResultTable(model28))
|                                          | Estimate | Std. Error |     df | t value | Pr(>\|t\|) |   |
|:-----------------------------------------|---------:|-----------:|-------:|--------:|-----------:|:--|
| (Intercept)                              |    10.58 |       1.66 |  16.82 |    6.37 |     0.0000 |   |
| generalRatingPost                        |     1.80 |       1.25 | 127.57 |    1.44 |     0.1533 |   |
| I(generalRatingPost * generalRatingPost) |    -0.32 |       1.18 | 128.57 |   -0.27 |     0.7898 |   |
| objLocTargetRating                       |     0.38 |       0.79 | 124.34 |    0.47 |     0.6369 |   |
Answer time interpretation
Here, effects only reach significance in the models that include the respective covariate, but the results are nonetheless interesting. For instance, Model 25 shows that, when general expectancy is controlled for, there is an inverted U-shaped relationship between pre-rating rank and answer time; this pattern must be read as a regular U-shape if one assumes that shorter answer times are indicative of better memory.
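As a quick sanity check on the inverted-U reading, the fixed-effect estimates reported for Model 25 can be used to plot the implied quadratic. This is only a sketch using the rounded coefficients from the table above, holding `generalRatingNorm` at zero and assuming `targetRankPre` is on a standardised scale.

```r
# Sketch: the quadratic in targetRankPre implied by Model 25's fixed effects
# (rounded estimates from the table; generalRatingNorm held at 0).
b0 <- 13.40   # intercept
b1 <-  0.11   # linear term (targetRankPre)
b2 <- -3.24   # quadratic term -- negative, hence an inverted U

x    <- seq(-2, 2, length.out = 100)
yhat <- b0 + b1 * x + b2 * x^2

# The peak (vertex) of the parabola sits at -b1 / (2 * b2)
peak <- -b1 / (2 * b2)

plot(x, yhat, type = "l",
     xlab = "targetRankPre", ylab = "Predicted answer time (s)")
abline(v = peak, lty = 2)
```

The peak lies very close to zero (about 0.017 on this scale), so answer times are predicted to be longest for objects of intermediate rank and shorter at both extremes, consistent with the interpretation above.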